

How to Run Inference on Ludwig Models Using TorchScript - Predibase


In Ludwig 0.6, we introduced the ability to export Ludwig models to TorchScript, making it easier than ever to deploy models for highly performant inference. In this blog post, we describe the benefits of serving models with TorchScript and demonstrate how to train, export, and use exported models on an example dataset. A common way to serve machine learning models is to wrap them in REST APIs and expose their endpoints. This works well if you do not have strict SLA requirements and backwards compatibility is not a concern, but serving a model in a production environment typically calls for a more robust solution.
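To make the export/load cycle concrete, here is a minimal sketch of the underlying TorchScript workflow using plain PyTorch. The `TinyModel` module is a hypothetical stand-in for an exported Ludwig model (not Ludwig's actual API); the key point is that a scripted model is serialized to a single file and reloaded without the original Python class, which is what enables deployment outside a Python training environment.

```python
import tempfile
import os
import torch
import torch.nn as nn

# Hypothetical stand-in for a trained model; any nn.Module can be
# compiled to TorchScript the same way.
class TinyModel(nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Sum features per row, then squash to (0, 1).
        return torch.sigmoid(x.sum(dim=1))

model = TinyModel().eval()
scripted = torch.jit.script(model)  # compile the module to TorchScript

# Serialize to a self-contained artifact; reloading does not require
# the TinyModel class definition to be importable.
path = os.path.join(tempfile.mkdtemp(), "model.pt")
scripted.save(path)
restored = torch.jit.load(path)

# Run inference with the restored model.
x = torch.zeros(2, 3)
out = restored(x)
print(out.shape)
```

Because the saved artifact carries its own graph, it can also be loaded from C++ or other TorchScript runtimes, which is what makes this route attractive for low-latency production serving.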